Discrete Adversarial Attack to Models of Code
Authors
Abstract
The pervasive brittleness of deep neural networks has attracted significant attention in recent years. A particularly interesting finding is the existence of adversarial examples, imperceptibly perturbed natural inputs that induce erroneous predictions in state-of-the-art models. In this paper, we study a different type of adversarial examples specific to code models, called discrete adversarial examples, which are created through program transformations that preserve the semantics of the original inputs. In particular, we propose a novel, general method that is highly effective in attacking a broad range of code models. From the defense perspective, our primary contribution is a theoretical foundation for the application of adversarial training, the most successful algorithm for training robust classifiers, to defending code models against the attack. Motivated by these results, we present a simple realization of adversarial training that substantially improves robustness against attacks in practice. We extensively evaluate both our attack and defense methods. Results show that our attack is significantly more effective than existing attacks, whether or not defense mechanisms are in place to aid models in resisting attacks. In addition, our defense improves the robustness of all evaluated models by the widest margin, against existing attacks as well as our own.
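To make the idea of a semantics-preserving transformation concrete, the sketch below applies a discrete perturbation to source code: it renames a local variable and keeps any variant that flips a black-box model's prediction. The candidate identifier set, the greedy search, and the `query_model` scoring function are illustrative assumptions, not the method proposed in the paper.

```python
import ast
from typing import Callable, Optional

def rename_variable(source: str, old_name: str, new_name: str) -> str:
    """Rename every occurrence of a variable in a snippet.

    Renaming a variable never changes what the program computes, so the
    transformed program is a semantics-preserving variant of the input.
    Requires Python 3.9+ for ast.unparse.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id == old_name:
            node.id = new_name
        elif isinstance(node, ast.arg) and node.arg == old_name:
            node.arg = new_name
    return ast.unparse(tree)

ORIGINAL = """
def sum_list(values):
    total = 0
    for v in values:
        total += v
    return total
"""

# A discrete "perturbation": draw replacement identifiers from a fixed
# candidate set instead of adding continuous pixel-style noise.
CANDIDATE_NAMES = ["tmp", "acc", "result", "x0"]

def attack(source: str, query_model: Callable[[str], int]) -> Optional[str]:
    """Greedy black-box search: return a variant that changes the label.

    query_model is a hypothetical helper that returns the model's
    predicted label for a code snippet.
    """
    original_label = query_model(source)
    for name in CANDIDATE_NAMES:
        variant = rename_variable(source, "total", name)
        if query_model(variant) != original_label:
            return variant  # adversarial example found
    return None
```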
Similar Resources
Learning to Attack: Adversarial Transformation Networks
With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direc...
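For reference, the gradient-based pixel attack this snippet alludes to is typically realized as a one-step sign-of-gradient perturbation (FGSM). The PyTorch sketch below is a generic illustration of that baseline, not the transformation-network approach this related paper proposes; the model, inputs, and epsilon value are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step gradient attack on image pixels (FGSM-style).

    Perturbs each pixel by eps in the direction that increases the loss,
    i.e. it directly uses gradients with respect to the image pixels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and a batch of images scaled to [0, 1]):
# x_adv = fgsm_attack(classifier, images, labels, eps=8 / 255)
```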
Birthday attack to discrete logarithm
The discrete logarithm in a finite group of large order has been widely applied in public-key cryptosystems. In this paper, we present a probabilistic algorithm for the discrete logarithm.
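The related paper's algorithm is not reproduced here; as a generic illustration of a birthday-style probabilistic search for a discrete logarithm, the sketch below collides random powers of g with randomized shifts of h in the multiplicative group modulo a small prime. The parameters and the collision-table strategy are assumptions for illustration only.

```python
import random
from typing import Optional

def birthday_dlog(g: int, h: int, p: int, order: int) -> Optional[int]:
    """Find x with g**x % p == h via a birthday-style collision search.

    Store g**r for random r; then look for a random s such that
    h * g**s == g**r (mod p), which yields x = r - s (mod order).
    Expected work and storage are on the order of sqrt(order).
    """
    num_samples = 4 * int(order ** 0.5) + 1
    table = {}
    for _ in range(num_samples):
        r = random.randrange(order)
        table[pow(g, r, p)] = r
    for _ in range(num_samples):
        s = random.randrange(order)
        value = (h * pow(g, s, p)) % p
        if value in table:
            return (table[value] - s) % order
    return None  # unlucky run: no collision found

# Example in the multiplicative group modulo a small prime.
p = 1019            # prime modulus
g = 2               # generator of (Z/pZ)*
order = p - 1       # group order
h = pow(g, 345, p)  # target with secret exponent 345
x = birthday_dlog(g, h, p, order)
assert x is None or pow(g, x, p) == h
```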
Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models
Recently, researchers have proposed using deep learning-based systems for malware detection. Unfortunately, all deep learning classification systems are vulnerable to adversarial attacks where miscreants can avoid detection by the classification algorithm with very few perturbations of the input data. Previous work has studied adversarial attacks against static analysis-based malware classifiers ...
From Code to Models
One of the cornerstones of formal methods is the notion that abstraction enables analysis. By constructing an abstract model, we can trade implementation detail for analytical power. The intent of a model is to preserve selected characteristics of a real-world artifact, while suppressing others. Unfortunately, practitioners are less likely to use a modeling tool if it cannot handle realwor...
SAS Code: Joint Models for Continuous and Discrete Longitudinal Data
Since different distributions and link functions have to be used for the different outcomes, we use a special device available in the SAS procedure PROC GLIMMIX, i.e., the ‘byobs=(.)’ specification that can be used to specify both the distribution in the ‘dist=’ option and the link function in the ‘link=’ option. Thus, before we start with the main analysis, two variables need to be created to ...
Journal
Journal title: Proceedings of the ACM on Programming Languages
Year: 2023
ISSN: 2475-1421
DOI: https://doi.org/10.1145/3591227